homology group


Persistent reachability homology in machine learning applications

Caputi, Luigi, Meadows, Nicholas, Riihimäki, Henri

arXiv.org Artificial Intelligence

We explore the recently introduced persistent reachability homology (PRH) of digraph data, i.e. data in the form of directed graphs. In particular, we study the effectiveness of PRH in a network classification task arising in a key neuroscience problem: epilepsy detection. PRH is a variation of the persistent homology of digraphs, which is more traditionally based on the directed flag complex (DPH). A main advantage of PRH is that it considers the condensations of the digraphs appearing in the persistence filtration and is thus computed from smaller digraphs. We compare the effectiveness of PRH to that of DPH and show that PRH outperforms DPH in the classification task. We use the Betti curves and their integrals as topological features and implement our pipeline with a support vector machine.
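
As a rough illustration of the featurization step described above, the sketch below builds Betti curves and their integrals from persistence diagrams and hands them to a support vector machine. Ordinary Vietoris-Rips persistence from the ripser package stands in for PRH, whose digraph condensation step is not part of standard libraries; the grid range and classifier settings are illustrative assumptions, not the authors' choices.

```python
# A sketch of the Betti-curve featurization, assuming the ripser and
# scikit-learn packages; Vietoris-Rips persistence stands in for PRH.
import numpy as np
from ripser import ripser
from sklearn.svm import SVC

def betti_curve(diagram, grid):
    """Count how many persistence intervals are alive at each grid value."""
    if len(diagram) == 0:
        return np.zeros(len(grid))
    births, deaths = diagram[:, 0], diagram[:, 1]
    return np.array([np.sum((births <= t) & (t < deaths)) for t in grid])

def features(point_cloud, grid=np.linspace(0.0, 2.0, 100)):
    """Betti curves of H0 and H1 plus their integrals, as one feature vector."""
    dgms = ripser(point_cloud, maxdim=1)["dgms"]
    curves = [betti_curve(d[np.isfinite(d[:, 1])], grid) for d in dgms]
    # Riemann-sum integral of each Betti curve.
    integrals = np.array([c.sum() * (grid[1] - grid[0]) for c in curves])
    return np.concatenate(curves + [integrals])

# clouds: a list of point clouds; labels: class labels (e.g. epileptic or not).
# clf = SVC(kernel="rbf").fit(np.stack([features(c) for c in clouds]), labels)
```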


Torsion in Persistent Homology and Neural Networks

Walch, Maria

arXiv.org Artificial Intelligence

We explore the role of torsion in hybrid deep learning models that incorporate topological data analysis, focusing on autoencoders. While most TDA tools use field coefficients, this conceals torsional features present in integer homology. We show that torsion can be lost during encoding, altered in the latent space, and in many cases, not reconstructed by standard decoders. Using both synthetic and high-dimensional data, we evaluate torsion sensitivity to perturbations and assess its recoverability across several autoencoder architectures. Our findings reveal key limitations of field-based approaches and underline the need for architectures or loss terms that preserve torsional information for robust data representation.
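
To see concretely what field coefficients hide, here is a minimal sketch (not from the paper) that computes the Smith normal form of the degree-2 integer boundary matrix of the standard 6-vertex triangulation of the real projective plane with sympy; the final invariant factor 2 is the Z/2 torsion class in H_1, which no single choice of field coefficients can identify as torsion.

```python
# A minimal torsion demonstration: Smith normal form of the degree-2
# boundary matrix of the 6-vertex triangulation of RP^2.
from itertools import combinations
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

triangles = [(1, 2, 3), (1, 3, 4), (1, 2, 6), (1, 4, 5), (1, 5, 6),
             (2, 3, 5), (2, 4, 5), (2, 4, 6), (3, 4, 6), (3, 5, 6)]
edges = sorted({e for t in triangles for e in combinations(t, 2)})

# Integer boundary matrix d2: rows indexed by edges, columns by triangles,
# with the usual alternating signs d[a,b,c] = [b,c] - [a,c] + [a,b].
d2 = Matrix(len(edges), len(triangles), lambda i, j: 0)
for j, (a, b, c) in enumerate(triangles):
    for sign, e in ((1, (b, c)), (-1, (a, c)), (1, (a, b))):
        d2[edges.index(e), j] = sign

snf = smith_normal_form(d2, domain=ZZ)
print([snf[k, k] for k in range(min(snf.shape))])
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 2] -- the trailing 2 is the Z/2 torsion in H_1.
```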


Data analysis using discrete cubical homology

Kapulkin, Chris, Kershaw, Nathan

arXiv.org Artificial Intelligence

Topological data analysis is a highly intuitive and powerful tool, based on a simple observation from topology that data has a shape and understanding this shape is key to analyzing the data. The main tool of topological data analysis is persistent homology, a way of quantifying n-dimensional "holes" obtained from the data in question. The essential premise behind persistent homology is known as topological inference: the assumption that the data is a finite sample from some large (typically infinite) topological space. One goal, therefore, is to reproduce characteristics or features of that space from the finite sample. The process typically involves several steps: building a filtered graph (often called the Rips graph) from a finite metric space, taking the associated filtration of flag complexes, and finally computing homology groups of these complexes. It is worth observing that in this pipeline, (filtered) graphs are merely an intermediate step in the construction, and not an object of independent interest. In contrast, in the present paper, we work with data that is most naturally presented in the form of a graph, without an assumption that it was sampled from a topological space. For example, consider a collection of time series of stock prices of all companies traded on a given exchange.
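
For reference, the conventional pipeline contrasted here (Rips graph, flag-complex filtration, persistent homology) looks roughly as follows in gudhi, assuming that library is available; the paper's own method computes discrete cubical homology directly on graph data instead.

```python
# The conventional pipeline, sketched with gudhi (assumed installed):
# point cloud -> Rips graph -> flag-complex filtration -> persistence.
import numpy as np
import gudhi

points = np.random.default_rng(0).normal(size=(100, 2))
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)  # flag (clique) filtration
for dim, (birth, death) in st.persistence():
    print(dim, birth, death)
```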


A Relative Homology Theory of Representation in Neural Networks

Beshkov, Kosio

arXiv.org Artificial Intelligence

Previous research has proven that the set of maps implemented by neural networks with a ReLU activation function is identical to the set of piecewise linear continuous maps. Furthermore, such networks induce a hyperplane arrangement splitting the input domain into convex polyhedra $G_J$ over which the network $\Phi$ operates in an affine manner. In this work, we leverage these properties to define the equivalence class of inputs $\sim_\Phi$, which can be split into two sets related to the local rank of $\Phi_J$ and the intersections $\cap \text{Im}\Phi_{J_i}$. We refer to the latter as the overlap decomposition $O_\Phi$ and prove that if the intersections between each polyhedron and the input manifold are convex, the homology groups of neural representations are isomorphic to relative homology groups $H_k(\Phi(M)) \simeq H_k(M,O_\Phi)$. This lets us compute Betti numbers without the choice of an external metric. We develop methods to numerically compute the overlap decomposition through linear programming and a union-find algorithm. Using this framework, we perform several experiments on toy datasets showing that, compared to standard persistent homology, our relative homology-based computation of Betti numbers tracks purely topological rather than geometric features. Finally, we study the evolution of the overlap decomposition during training on various classification problems while varying network width and depth and discuss some shortcomings of our method.
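
The numerical recipe named above might be sketched as follows: a linear-programming feasibility test for whether two convex polyhedra in H-representation $\{x : Ax \le b\}$ intersect, combined with a union-find pass merging intersecting pieces into overlap classes. Representing the network's image pieces as explicit polyhedra is a placeholder assumption here, not the paper's exact construction.

```python
# Sketch of the two ingredients: an LP feasibility test for pairwise
# intersection of polyhedra {x : A x <= b}, and a union-find merge.
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Feasibility LP: minimize 0 subject to A1 x <= b1 and A2 x <= b2."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0  # status 0 means a feasible point was found

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def overlap_classes(polys):
    """Merge all pairs of intersecting polyhedra into overlap classes."""
    parent = list(range(len(polys)))
    for i in range(len(polys)):
        for j in range(i + 1, len(polys)):
            if polyhedra_intersect(*polys[i], *polys[j]):
                parent[find(parent, i)] = find(parent, j)
    return [find(parent, i) for i in range(len(polys))]
```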


On Logical Extrapolation for Mazes with Recurrent and Implicit Networks

Knutson, Brandon, Rabeendran, Amandin Chyba, Ivanitskiy, Michael, Pettyjohn, Jordan, Diniz-Behn, Cecilia, Fung, Samy Wu, McKenzie, Daniel

arXiv.org Machine Learning

Recent work has suggested that certain neural network architectures, particularly recurrent neural networks (RNNs) and implicit neural networks (INNs), are capable of logical extrapolation. That is, one may train such a network on easy instances of a specific task and then apply it successfully to more difficult instances of the same task. In this paper, we revisit this idea and show that (i) the capacity for extrapolation is less robust than previously suggested. Specifically, in the context of a maze-solving task, we show that while INNs (and some RNNs) are capable of generalizing to larger maze instances, they fail to generalize along axes of difficulty other than maze size. (ii) Models that are explicitly trained to converge to a fixed point (e.g. the INN we test) are likely to do so when extrapolating, while models that are not (e.g. the RNN we test) may exhibit more exotic limiting behaviour, such as limit cycles, even when they correctly solve the problem. Our results suggest that (i) further study into why such networks extrapolate easily along certain axes of difficulty yet struggle with others is necessary, and (ii) analyzing the dynamics of extrapolation may yield insights into designing more efficient and interpretable logical extrapolators.
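
A toy version of the limiting-behaviour analysis might look like the following: iterate an update map to (near) convergence, then test whether the tail of the trajectory repeats with some short period. The two example maps below (a rotation and a contraction) are stand-ins for a trained RNN or INN update cell.

```python
# Toy probe for fixed points versus limit cycles in an iterated update map.
import numpy as np

def classify_limit(step, x0, n_iters=2000, tol=1e-6, max_period=50):
    x = x0
    for _ in range(n_iters):
        x = step(x)
    y = x
    for p in range(1, max_period + 1):
        y = step(y)
        if np.linalg.norm(y - x) < tol:
            return "fixed point" if p == 1 else f"limit cycle (period {p})"
    return "no short cycle detected"

theta = 2 * np.pi / 5  # a rotation cycles forever; a contraction settles
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(classify_limit(lambda v: R @ v, np.array([1.0, 0.0])))    # period 5
print(classify_limit(lambda v: 0.5 * v, np.array([1.0, 0.0])))  # fixed point
```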


Bi-Filtration and Stability of TDA Mapper for Point Cloud Data

Bungula, Wako, Darcy, Isabel

arXiv.org Machine Learning

Carlsson, Singh and Memoli's TDA mapper takes a point cloud dataset and outputs a graph that depends on several parameter choices. Dey, Memoli, and Wang developed Multiscale Mapper for abstract topological spaces so that parameter choices can be analyzed via persistent homology. However, when applied to actual data, one does not always obtain filtrations of mapper graphs. DBSCAN, one of the most common clustering algorithms used in the TDA mapper software, has two parameters, $\epsilon$ and MinPts. If MinPts = 1, then DBSCAN is equivalent to single linkage clustering with cutting height $\epsilon$. We show that if DBSCAN clustering is used with MinPts > 2, a filtration of mapper graphs may not exist except in the absence of free-border points; but such filtrations exist if DBSCAN clustering is used with MinPts = 1 or 2 as the cover size increases, $\epsilon$ increases, and/or MinPts decreases. However, the 1-dimensional filtration is unstable: if one adds noise to a data set so that each data point has been perturbed by a distance at most $\delta$, the persistent homology of the mapper graph of the perturbed data set can be significantly different from that of the original data set. We show that we can obtain stability by increasing both the cover size and $\epsilon$ at the same time. In particular, we show that the bi-filtrations of the homology groups with respect to cover size and $\epsilon$ between these two datasets are $2\delta$-interleaved.
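
The MinPts = 1 equivalence quoted above can be checked numerically, as in the sketch below: DBSCAN with min_samples=1 and single-linkage clustering cut at height $\epsilon$ induce the same partition (labels may differ by a permutation, so partitions rather than label vectors are compared). The dataset and $\epsilon$ are arbitrary illustrations.

```python
# Numerical check: DBSCAN with MinPts = 1 and single-linkage clustering
# cut at height epsilon induce the same partition of the data.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(1).normal(size=(60, 2))
eps = 0.4

db = DBSCAN(eps=eps, min_samples=1).fit_predict(X)
sl = fcluster(linkage(X, method="single"), t=eps, criterion="distance")

def as_partition(labels):
    """Forget label names: represent a clustering as a set of index sets."""
    groups = {}
    for i, lab in enumerate(labels):
        groups.setdefault(lab, set()).add(i)
    return {frozenset(g) for g in groups.values()}

print(as_partition(db) == as_partition(sl))  # True
```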


Characterization of topological structures in different neural network architectures

Świder, Paweł

arXiv.org Artificial Intelligence

Understanding what is going on inside neural networks will be one of the most crucial tasks in the future, as these models become even more powerful and widely deployed. This work aims to use TDA methods to analyze neural representations. We develop methods for analyzing representations from different architectures and check how one should use them to obtain valid results. Our findings indicate that removing outliers does not have much impact on the results and that representations should be compared using the same number of elements. We applied these methods to the ResNet, VGG19, and ViT architectures and found substantial differences along with some similarities. Additionally, we determined that models with similar architectures tend to have similar topologies of representations, and models with a larger number of layers change their topology more smoothly. Furthermore, we found that the topology of pre-trained and fine-tuned models starts to differ in the middle and final layers while remaining quite similar in the initial layers. These findings demonstrate the efficacy of TDA in the analysis of neural network behavior.
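
One way to honor the equal-size requirement noted above is to subsample each representation to a common size before computing and comparing persistence diagrams, as in this sketch; ripser and persim are assumed available, and the sample size, homology degree, and bottleneck distance are illustrative choices rather than the paper's exact protocol.

```python
# Equal-size comparison sketch: subsample two activation sets to a common
# size, then compare their H1 persistence diagrams.
import numpy as np
from ripser import ripser
from persim import bottleneck

def h1_diagram(activations, n=256, seed=0):
    """Persistence diagram of a fixed-size random subsample."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(activations), size=min(n, len(activations)),
                     replace=False)
    return ripser(activations[idx], maxdim=1)["dgms"][1]

# acts_a, acts_b: (num_samples, num_features) activation matrices from two
# layers or two models (placeholders).
# print(bottleneck(h1_diagram(acts_a), h1_diagram(acts_b)))
```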


A rank decomposition for the topological classification of neural representations

Beshkov, Kosio, Einevoll, Gaute T.

arXiv.org Artificial Intelligence

Neural networks can be thought of as applying a transformation to an input dataset. The way in which they change the topology of such a dataset often holds practical significance for many tasks, particularly those demanding non-homeomorphic mappings for optimal solutions, such as classification problems. In this work, we leverage the fact that neural networks are equivalent to continuous piecewise-affine maps, whose rank can be used to pinpoint regions in the input space that undergo non-homeomorphic transformations, leading to alterations in the topological structure of the input dataset. Our approach enables us to make use of the relative homology sequence, with which one can study the homology groups of the quotient of a manifold $\mathcal{M}$ and a subset $A$, assuming some minimal properties on these spaces. As a proof of principle, we empirically investigate the presence of low-rank (topology-changing) affine maps as a function of network width and mean weight. We show that in randomly initialized narrow networks, there will be regions in which the (co)homology groups of a data manifold can change. As the width increases, the homology groups of the input manifold become more likely to be preserved. We end this part of our work by constructing highly non-random wide networks that do not have this property and relating this non-random regime to Dale's principle, which is a defining characteristic of biological neural networks. Finally, we study simple feedforward networks trained on MNIST, as well as on toy classification and regression tasks, and show that networks manipulate the topology of data differently depending on the continuity of the task they are trained on.
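
Since the network is affine on each polyhedron, the Jacobian at a generic point equals the linear part of the corresponding affine map, so its rank can serve as the local-rank probe described above. The PyTorch sketch below (architecture and sampling are arbitrary illustrative choices) flags points whose local rank drops below the input dimension.

```python
# Local-rank probe: on each ReLU region the network is affine, so the
# Jacobian at a generic point is the linear part of that affine map.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)

def local_rank(x):
    J = torch.autograd.functional.jacobian(net, x)  # shape (out_dim, in_dim)
    return torch.linalg.matrix_rank(J).item()

xs = torch.randn(100, 2)
ranks = [local_rank(x) for x in xs]
print(min(ranks), max(ranks))  # rank < 2 flags a non-homeomorphic patch
```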


Quantifying Manifolds: Do the manifolds learned by Generative Adversarial Networks converge to the real data manifold?

Chaudhuri, Anupam, Simmons, Anj, Abdelrazek, Mohamed

arXiv.org Artificial Intelligence

This paper presents our experiments to quantify the manifolds learned by ML models (in our experiments, a GAN model) as they train. We compare the manifold learned at each epoch to the manifold representing the real data. To quantify a manifold, we study the intrinsic dimension and topological features of the manifold learned by the ML model, how these metrics change as we continue to train the model, and whether these metrics converge over the course of training to those of the real data manifold.
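
One concrete metric of this kind is intrinsic dimension. Below is a sketch of the TwoNN maximum-likelihood estimator of Facco et al., which could be evaluated on real samples and on generator samples at each epoch to watch the two estimates converge; whether the paper uses this particular estimator is an assumption, and it stands in for any intrinsic-dimension estimate.

```python
# TwoNN intrinsic-dimension estimator (Facco et al.), one candidate for the
# per-epoch metric; assumes scikit-learn and no duplicate points.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(X):
    """Maximum-likelihood dimension from 1st/2nd nearest-neighbor ratios."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dist[:, 2] / dist[:, 1]  # column 0 is each point's distance to itself
    return len(X) / np.sum(np.log(mu))

# Compare twonn_dimension(real_samples) with twonn_dimension(gan_samples)
# after each training epoch to watch for convergence of the two estimates.
```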


Image Classification using Combination of Topological Features and Neural Networks

Lima, Mariana Dória Prata, Giraldi, Gilson Antonio, Junior, Gastão Florêncio Miranda

arXiv.org Artificial Intelligence

In this work we use the persistent homology method, a technique in topological data analysis (TDA), to extract essential topological features from the data space and combine them with deep learning features for classification tasks. In TDA, the concepts of complexes and filtration are building blocks. Firstly, a filtration is constructed from some complex. Then, persistent homology classes are computed, and their evolution along the filtration is visualized through the persistence diagram. Additionally, we applied vectorization techniques to the persistence diagram to make this topological information compatible with machine learning algorithms. This was carried out with the aim of classifying images from multiple classes in the MNIST dataset. Our approach inserts topological features into deep learning approaches composed of single- and two-stream neural network architectures, based on a multi-layer perceptron (MLP) and a convolutional neural network (CNN), tailored for multi-class classification on the MNIST dataset. In our analysis, we evaluated the obtained results and compared them with the outcomes achieved through the baselines available in the TensorFlow library. The main conclusion is that topological information may increase neural network accuracy in multi-class classification tasks, at the price of the computational complexity of the persistent homology calculation. To the best of our knowledge, this is the first work that combines deep learning features with topological features for multi-class classification tasks.
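
As an example of the vectorization step mentioned above, the sketch below turns persistence diagrams into fixed-length persistence images with gudhi.representations, producing vectors that can be concatenated with MLP or CNN features; the bandwidth and resolution are illustrative, and the paper's exact vectorization may differ.

```python
# Sketch of one vectorization choice: persistence images via
# gudhi.representations (assumed available).
import numpy as np
from gudhi.representations import PersistenceImage

# dgms: one (n_i, 2) array of finite (birth, death) pairs per input image;
# the two diagrams below are placeholders for diagrams of MNIST digits.
dgms = [np.array([[0.0, 1.0], [0.2, 0.5]]), np.array([[0.1, 0.9]])]

pi = PersistenceImage(bandwidth=0.1, resolution=[20, 20])
vectors = pi.fit_transform(dgms)  # shape (n_diagrams, 400)
print(vectors.shape)
# Each vector can feed one stream of an MLP/CNN alongside pixel features.
```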